Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and IMB Benchmarks

Authors

  • Subhash Saini
  • Rolf Rabenseifner
  • Brian T. N. Gunney
  • Thomas E. Spelce
  • Alice Koniges
  • Don Dossa
  • Panagiotis Adamidis
  • Robert Ciotti
  • Sunil R. Tiyyagura
  • Matthias Müller
  • Rod Fatoohi
Abstract

The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of six leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8, and IBM Blue Gene/L. These six systems also use six different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, NEC IXS, and IBM Blue Gene/L Torus). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on five of these systems.
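IMB itself is an MPI-based C benchmark suite; its point-to-point tests report half the measured round-trip time of a message exchange as the one-way latency. As an illustrative stand-in only (not the actual IMB code), the following sketch applies the same ping-pong timing pattern between two local Python processes over a pipe; the helper name `pingpong_latency` is a hypothetical one chosen for this note.

```python
import time
from multiprocessing import Process, Pipe

def echo(conn, iters):
    # Partner process: bounce every message straight back to the sender.
    for _ in range(iters):
        conn.send_bytes(conn.recv_bytes())
    conn.close()

def pingpong_latency(msg_size=8, iters=1000):
    """Average one-way latency (seconds) for msg_size-byte messages,
    measured ping-pong style as in IMB: half the mean round-trip time."""
    parent, child = Pipe()
    p = Process(target=echo, args=(child, iters))
    p.start()
    payload = bytes(msg_size)
    t0 = time.perf_counter()
    for _ in range(iters):
        parent.send_bytes(payload)   # ping
        parent.recv_bytes()          # pong
    elapsed = time.perf_counter() - t0
    p.join()
    return elapsed / (2 * iters)

if __name__ == "__main__":
    print(f"avg one-way latency: {pingpong_latency() * 1e6:.1f} us")
```

On a real interconnect this loop would use `MPI_Send`/`MPI_Recv` between ranks on different nodes; the pipe version only demonstrates the measurement structure, not network performance.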


Similar Resources

Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and ...

The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of seven leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8, Cray XT3, and IBM Blue Gene/L. These systems also use different networks (SGI NUMALINK4, Cray net...


Interconnect Performance Evaluation of SGI Altix 3700, Cray X1, Cray Opteron, and Dell PowerEdge

We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnect of these systems as well as to compare between these interconnects. We used several benchmarks to examine network behavior under different communication patterns and numbe...


Understanding the Cray X1 System

This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel para...
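The Laplacian test codes referred to above are not reproduced in this listing. Purely as a generic illustration of the kind of solver such a suite exercises (an assumption of this note, not the paper's actual code), a minimal serial Jacobi relaxation for Laplace's equation on a square grid might look like:

```python
def jacobi_laplace(n=16, iters=200):
    """Relax Laplace's equation on an n x n grid with fixed boundaries:
    top edge held at 1.0, the other three edges at 0.0."""
    grid = [[0.0] * n for _ in range(n)]
    for j in range(n):
        grid[0][j] = 1.0  # Dirichlet boundary condition on the top edge
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Jacobi update: each interior point becomes the average
                # of its four neighbors from the previous sweep.
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid
```

The parallel variants benchmarked in such studies typically decompose this grid across processes and exchange halo rows with MPI each sweep; the serial version above only shows the numerical kernel.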


A Scalability Study of Columbia using the NAS Parallel Benchmarks

The Columbia system at the NASA Advanced Supercomputing (NAS) facility is a cluster of 20 SGI Altix nodes, each with 512 Itanium 2 processors and 1 terabyte (TB) of shared-access memory. Four of the nodes are organized as a 2048-processor capability-computing platform connected by two low-latency interconnects— NUMALink4 (NL4) and InfiniBand (IB). To evaluate the scalability of Columbia with re...
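The scalability measurements themselves are not included in this excerpt. A standard way to frame such results (this framing is an assumption of this note, not the paper's analysis) is Amdahl's law, which bounds the speedup achievable on p processors when a fraction f of the work is inherently serial:

```python
def amdahl(serial_frac, p):
    """Amdahl's-law speedup bound: S(p) = 1 / (f + (1 - f) / p),
    where f is the serial fraction of the workload."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / p)

# Even a 5% serial fraction caps speedup well below the processor count:
# amdahl(0.05, 2048) stays under 20, far from the ideal speedup of 2048.
```

This is why scalability studies on systems like the 2048-processor Columbia configuration focus on reducing serial and communication overheads as the processor count grows.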


Holistic Hardware Counter Performance Analysis of Parallel Programs

The KOJAK toolkit has been augmented with refined hardware performance counter support, including more convenient measurement specification, additional metric derivations and hierarchical structuring, and an extended algebra for integrating multiple experiments. Comprehensive automated analysis of a hybrid OpenMP/MPI parallel program, the ASC Purple sPPM benchmark, is demonstrated with performa...


Publication year: 2006